Explainable Artificial Intelligence Model for Evaluating Shear Strength Parameters of Municipal Solid Waste Across Diverse Compositional Profiles
Suknark, Parichat; Youwaib, Sompote; Kitkobsin, Tipok; Towprayoon, Sirintornthep; Chiemchaisri, Chart; Wangyao, Komsilp
Accurate prediction of shear strength parameters in Municipal Solid Waste (MSW) remains a critical challenge in geotechnical engineering due to the heterogeneous nature of waste materials and their temporal evolution through degradation processes. This paper presents a novel explainable artificial intelligence (XAI) framework for evaluating cohesion and friction angle across diverse MSW compositional profiles. The proposed model integrates a multi-layer perceptron architecture with SHAP (SHapley Additive exPlanations) analysis to provide transparent insights into how specific waste components influence strength characteristics. Training data encompassed large-scale direct shear tests across various waste compositions and degradation states. The model demonstrated superior predictive accuracy compared to traditional gradient boosting methods, achieving mean absolute percentage errors of 7.42% and 14.96% for friction angle and cohesion predictions, respectively. Through SHAP analysis, the study revealed that fibrous materials and particle size distribution were primary drivers of shear strength variation, with food waste and plastics showing significant but non-linear effects. The model's explainability component successfully quantified these relationships, enabling evidence-based recommendations for waste management practices. This research bridges the gap between advanced machine learning and geotechnical engineering practice, offering a reliable tool for rapid assessment of MSW mechanical properties while maintaining interpretability for engineering decision-making.
- Water & Waste Management > Solid Waste Management (1.00)
- Energy > Oil & Gas > Upstream (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.54)
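The abstract above pairs a multi-layer perceptron with SHAP attributions. The following is a minimal sketch of that kind of pipeline, not the authors' implementation: the data are synthetic, and the feature names (food_waste, plastics, fibrous, inert, d50_mm) and hyperparameters are illustrative assumptions, not the study's variables.

```python
# Minimal sketch: MLP surrogate for MSW shear strength with model-agnostic
# SHAP attributions. All data, features, and settings are assumptions.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical compositional fractions plus a particle-size descriptor;
# the target is a synthetic stand-in for friction angle (degrees).
features = ["food_waste", "plastics", "fibrous", "inert", "d50_mm"]
X = rng.uniform(0.0, 1.0, size=(200, len(features)))
y = 25 + 10 * X[:, 2] + 5 * X[:, 4] - 6 * X[:, 0] ** 2 + rng.normal(0, 1, 200)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(scaler.transform(X), y)

# KernelExplainer treats the network as a black box; a small background
# sample keeps the Shapley value estimation tractable.
predict = lambda A: model.predict(scaler.transform(A))
explainer = shap.KernelExplainer(predict, shap.sample(X, 50, random_state=0))
shap_values = explainer.shap_values(X[:5])

# Mean absolute SHAP value per feature as a rough global importance ranking.
for name, phi in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name:12s} mean |SHAP| = {phi:.3f}")
```

KernelExplainer is chosen here only because it works with any predict function; a gradient-based explainer would also be a reasonable fit for an MLP.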
The Challenge of Imputation in Explainable Artificial Intelligence Models
Ahmad, Muhammad Aurangzeb; Eckert, Carly; Teredesai, Ankur
Even though the field of Artificial Intelligence is more than sixty years old, it is only in the last decade or so that AI systems have become increasingly interwoven into the socio-technical fabric of society, with correspondingly large societal impact. This growing adoption of AI has led to mounting calls for accountability and regulation of AI systems [8]. Model explanations are considered one of the most important means of providing that accountability. Such explanations, however, can only be as good as the data on which the underlying algorithms are trained. This is where the issue of missing and imputed data becomes pivotal for model explanations. In some domains, such as healthcare, almost all datasets have missing values [6]. Because many applications of AI in healthcare are patient-oriented, decisions informed by AI and ML models can have significant clinical consequences.
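To make the abstract's point concrete, here is a minimal sketch, under assumed synthetic data, showing that the choice of imputation strategy can shift the SHAP attributions of a model trained on incomplete data. It is a toy illustration, not the authors' experiment; the missingness rate, model, and imputers are all assumptions.

```python
# Minimal sketch: two imputation strategies applied to the same incomplete
# data yield different SHAP attributions, illustrating why imputation
# matters for model explanations. All data and settings are assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import KNNImputer, SimpleImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 300)

# Knock out 30% of the most informative feature to mimic missingness
# of the kind common in clinical datasets.
X_missing = X.copy()
mask = rng.random(300) < 0.3
X_missing[mask, 0] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("knn", KNNImputer(n_neighbors=5))]:
    X_imp = imputer.fit_transform(X_missing)
    model = RandomForestRegressor(random_state=0).fit(X_imp, y)
    # Mean absolute SHAP value per feature under each imputation scheme.
    phi = shap.TreeExplainer(model).shap_values(X_imp)
    print(name, np.round(np.abs(phi).mean(axis=0), 3))
```

Comparing the two printed rows shows how much of the first feature's attributed importance depends on how its missing values were filled in, which is precisely the imputation challenge the abstract raises.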